Marshall’s lemma for convex density estimation
Authors
Abstract
Marshall’s [Nonparametric Techniques in Statistical Inference (1970) 174–176] lemma is an analytical result that implies √n-consistency of the distribution function corresponding to the Grenander [Skand. Aktuarietidskr. 39 (1956) 125–153] estimator of a non-increasing probability density. The present paper derives analogous results for the setting of convex densities on [0, ∞).
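For context, the Grenander estimator is the left derivative of the least concave majorant (LCM) of the empirical distribution function, and Marshall's lemma says this LCM is at least as close to a concave true distribution function as the empirical distribution function itself. A minimal sketch of the LCM computation via an upper convex hull (the function name `lcm_of_ecdf` and the monotone-chain approach are illustrative choices, not from the paper):

```python
import numpy as np

def lcm_of_ecdf(x):
    """Least concave majorant of the empirical CDF of a sample on [0, inf).

    Returns the hull vertices as an (k, 2) array; the slopes of the
    segments between consecutive vertices give the Grenander estimator
    of a non-increasing density (illustrative sketch).
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # ECDF knots: (0, 0), (x_(1), 1/n), ..., (x_(n), 1)
    pts = np.column_stack([np.concatenate([[0.0], x]),
                           np.arange(n + 1) / n])
    hull = []
    for px, py in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the middle vertex if it lies on or below the chord,
            # which keeps the hull concave (slopes non-increasing)
            if (y2 - y1) * (px - x1) <= (py - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((px, py))
    return np.array(hull)
```

The left derivative at any point is the slope of the hull segment covering it; Marshall's lemma then gives sup-norm closeness of the hull to F whenever F is concave.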
Similar articles
Numerical simulation of Laminar Free Convection Heat Transfer around Isothermal Concave and Convex Body Shapes
In the present research, free convection heat transfer from isothermal concave and convex body shapes is studied numerically. The body shapes investigated here are bi-sphere, cylinder, prolate and cylinder with hemispherical ends; all share the same height-to-width ratio (H/D = 2). A numerical simulation is implemented to obtain heat transfer and fluid flow for all of the geometries in a...
A Note on Covering by Convex Bodies
A classical theorem of Rogers states that for any convex body K in n-dimensional Euclidean space there exists a covering of the space by translates of K with density not exceeding n log n + n log log n + 5n. Rogers’ theorem does not say anything about the structure of such a covering. We show that for sufficiently large values of n the same bound can be attained by a covering which is the union of ...
CAKE: Convex Adaptive Kernel Density Estimation
In this paper we present a generalization of kernel density estimation called Convex Adaptive Kernel Density Estimation (CAKE) that replaces single bandwidth selection by a convex aggregation of kernels at all scales, where the convex aggregation is allowed to vary from one training point to another, treating the fundamental problem of heterogeneous smoothness in a novel way. Learning the CAKE ...
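The aggregation described above can be illustrated with a small sketch: a density estimate formed as a per-training-point convex combination of Gaussian kernels at several bandwidths. The function `cake_density` and its signature are illustrative assumptions; the actual CAKE method learns the simplex weights, which this sketch simply takes as given:

```python
import numpy as np

def cake_density(x_eval, x_train, bandwidths, weights):
    """Sketch of convex kernel aggregation: training point i carries
    simplex weights weights[i, k] over bandwidths h_k, and the estimate is
    f(x) = (1/n) * sum_i sum_k weights[i, k] * K_{h_k}(x - X_i)."""
    weights = np.asarray(weights, dtype=float)
    assert np.allclose(weights.sum(axis=1), 1.0), "rows must lie on the simplex"
    diffs = (np.asarray(x_eval, dtype=float)[:, None]
             - np.asarray(x_train, dtype=float)[None, :])   # shape (m, n)
    dens = np.zeros(diffs.shape[0])
    for k, h in enumerate(bandwidths):
        # Gaussian kernel at scale h, weighted per training point
        kern = np.exp(-0.5 * (diffs / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
        dens += kern @ weights[:, k]
    return dens / diffs.shape[1]
```

Because each kernel integrates to one and each weight row sums to one, the aggregate remains a valid density for any choice of weights on the simplex.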
Stochastic Mirror Descent with Inexact Prox-Mapping in Density
Appendix A: Strong convexity. As we discussed, the posterior from Bayes' rule can be viewed as the optimum of an optimization problem in Eq. (1). We will show that the objective function is strongly convex w.r.t. the KL divergence. Proof for Lemma 1. The lemma follows directly from the generalized Pythagoras theorem for Bregman divergences. In particular, for the KL divergence, we have KL(q_1 ‖ q) = KL(q ...
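The identity truncated in the snippet above appears to be the generalized Pythagoras relation for the KL divergence; a plausible reconstruction (not recoverable from the snippet itself, so stated here only as what that relation gives for densities $q$, $q_1$, $q_2$) is

\[
\mathrm{KL}(q_1 \,\|\, q)
  = \mathrm{KL}(q_1 \,\|\, q_2) + \mathrm{KL}(q_2 \,\|\, q)
    + \int (q_1 - q_2)\,\log\frac{q_2}{q}\,.
\]

When $q_2$ is the minimizer over a convex set containing $q_1$, the cross term is nonnegative, so $\mathrm{KL}(q_1 \,\|\, q) \ge \mathrm{KL}(q_1 \,\|\, q_2) + \mathrm{KL}(q_2 \,\|\, q)$, which is the strong-convexity statement the proof of Lemma 1 invokes.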
Generalizing Bottleneck Problems
Given a pair of random variables (X, Y) ∼ P_XY and two convex functions f1 and f2, we introduce two bottleneck functionals as the lower and upper boundaries of the two-dimensional convex set that consists of the pairs (I_f1(W; X), I_f2(W; Y)), where I_f denotes f-information and W varies over the set of all discrete random variables satisfying the Markov condition W → X → Y. Applying Witsenhause...